LLM-as-a-Grader: Practical Insights from Large Language Model for Short-Answer and Report Evaluation
Byun, Grace, Rajwal, Swati, Choi, Jinho D.
Large Language Models (LLMs) are increasingly explored for educational tasks such as grading, yet their alignment with human evaluation in real classrooms remains underexamined. In this study, we investigate the feasibility of using an LLM (GPT-4o) to evaluate short-answer quizzes and project reports in an undergraduate Computational Linguistics course. We collect responses from approximately 50 students across five quizzes and receive project reports from 14 teams. LLM-generated scores are compared against human evaluations conducted independently by the course teaching assistants (TAs). Our results show that GPT-4o achieves strong correlation with human graders (up to 0.98) and exact score agreement in 55% of quiz cases. For project reports, it also shows strong overall alignment with human grading, while exhibiting some variability in scoring technical, open-ended responses. We release all code and sample data to support further research on LLMs in educational assessment. This work highlights both the potential and limitations of LLM-based grading systems and contributes to advancing automated grading in real-world academic settings.
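The paper's headline numbers (Pearson correlation up to 0.98, exact score agreement in 55% of quiz cases) correspond to two standard metrics. A minimal sketch in plain Python, with made-up scores in place of the real quiz data:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def exact_agreement(xs, ys):
    """Fraction of items where both graders assign the identical score."""
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

# Hypothetical quiz scores: TA grades vs. LLM grades for the same students
ta  = [10, 8, 9, 7, 10, 6, 9, 8]
llm = [10, 8, 8, 7, 10, 7, 9, 8]
r = pearson_r(ta, llm)
agree = exact_agreement(ta, llm)
```

Both quantities are scale-free, so they apply equally to quiz points and rubric-based report scores.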
Towards Human-Like Grading: A Unified LLM-Enhanced Framework for Subjective Question Evaluation
Zhua, Fanwei, He, Jiaxuan, Chen, Xiaoxiao, Chen, Zulong, Lu, Quan, Mei, Chenrui
Automatic grading of subjective questions remains a significant challenge in examination assessment due to the diversity in question formats and the open-ended nature of student responses. Existing works primarily focus on a specific type of subjective question and lack the generality to support comprehensive exams that contain diverse question types. In this paper, we propose a unified Large Language Model (LLM)-enhanced auto-grading framework that provides human-like evaluation for all types of subjective questions across various domains. Our framework integrates four complementary modules to holistically evaluate student answers. In addition to a basic text matching module that provides a foundational assessment of content similarity, we leverage the powerful reasoning and generative capabilities of LLMs to: (1) compare key knowledge points extracted from both student and reference answers, (2) generate a pseudo-question from the student answer to assess its relevance to the original question, and (3) simulate human evaluation by identifying content-related and non-content strengths and weaknesses. Extensive experiments on both general-purpose and domain-specific datasets show that our framework consistently outperforms traditional and LLM-based baselines across multiple grading metrics. Moreover, the proposed system has been successfully deployed in real-world training and certification exams at a major e-commerce enterprise.
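The framework combines four module outputs (text matching, knowledge-point comparison, pseudo-question relevance, and simulated human evaluation) into one grade. The abstract does not specify the aggregation rule; as a loudly-labeled assumption, a normalized weighted sum might look like this:

```python
def combine_module_scores(scores, weights):
    """Aggregate the four per-module scores (each in [0, 1]) into a final
    grade. A weighted average is an assumption; the paper's actual
    aggregation is not described in the abstract."""
    assert len(scores) == len(weights) and sum(weights) > 0
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical module outputs for one student answer
modules = [0.80,  # text matching
           0.60,  # knowledge-point comparison
           0.90,  # pseudo-question relevance
           0.70]  # simulated human evaluation
grade = combine_module_scores(modules, weights=[1, 2, 1, 2])
```

Weighting the LLM-based modules more heavily is likewise only an illustrative choice.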
Skewed Score: A statistical framework to assess autograders
Dubois, Magda, Coppock, Harry, Giulianelli, Mario, Flesch, Timo, Luettgau, Lennart, Ududec, Cozmin
The evaluation of large language model (LLM) outputs is increasingly performed by other LLMs, a setup commonly known as "LLM-as-a-judge", or autograders. While autograders offer a scalable alternative to human evaluation, they have shown mixed reliability and may exhibit systematic biases, depending on response type, scoring methodology, domain specificity, or other factors. Here we propose a statistical framework based on Bayesian generalised linear models (GLMs) that enables researchers to simultaneously assess their autograders while addressing their primary research questions (e.g., LLM evaluation). Our approach models evaluation outcomes (e.g., scores or pairwise preferences) as a function of properties of the grader (e.g., human vs. autograder) and the evaluated item (e.g., response length or the LLM that generated it), allowing for explicit quantification of scoring differences and potential biases within a unified framework. In addition, our method can be used to augment traditional metrics such as inter-rater agreement, by providing uncertainty estimates and clarifying sources of disagreement. Overall, this approach contributes to more robust and interpretable use of autograders in LLM evaluation, enabling both performance analysis and bias detection.
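The paper's full framework is a Bayesian GLM with several covariates; as a deliberately simplified, frequentist stand-in, a linear model with a single binary grader-type covariate reduces to a difference in group means, where the coefficient on "is autograder" is exactly the scoring offset the framework is meant to quantify:

```python
from statistics import mean

def ols_bias(scores, is_autograder):
    """Fit score = b0 + b1 * is_autograder by ordinary least squares.
    With one binary covariate, b0 is the human-grader mean and b1 is
    the autograder's average scoring offset (a one-covariate, non-Bayesian
    simplification of the paper's GLM)."""
    human = [s for s, a in zip(scores, is_autograder) if not a]
    auto  = [s for s, a in zip(scores, is_autograder) if a]
    b0 = mean(human)
    return b0, mean(auto) - b0

# Hypothetical scores for the same items from a human and an autograder
scores = [7, 8, 6, 9, 6, 7, 5, 8]
grader = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = human, 1 = autograder
b0, b1 = ols_bias(scores, grader)  # b1 < 0 means the autograder scores lower
```

The Bayesian version additionally yields posterior uncertainty on b1, which is what lets the framework augment agreement metrics with credible intervals.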
ChatGPT for automated grading of short answer questions in mechanical ventilation
Standardised tests using short answer questions (SAQs) are common in postgraduate education. Large language models (LLMs) simulate conversational language and interpret unstructured free-text responses in ways that align with applying SAQ grading rubrics, making them attractive for automated grading. We evaluated ChatGPT 4o for grading SAQs in a postgraduate medical setting using data from 215 students (557 short-answer responses) enrolled in an online course on mechanical ventilation (2020–2024). Deidentified responses to three case-based scenarios were presented to ChatGPT with a standardised grading prompt and rubric. Outputs were analysed using mixed-effects modelling, variance component analysis, intraclass correlation coefficients (ICCs), Cohen's kappa, Kendall's W, and Bland–Altman statistics. ChatGPT awarded systematically lower marks than human graders, with a mean difference (bias) of -1.34 on a 10-point scale. ICC values indicated poor individual-level agreement (ICC1 = 0.086), and Cohen's kappa (-0.0786) suggested no meaningful agreement. Variance component analysis showed minimal variability among the five ChatGPT sessions (G-value = 0.87), indicating internal consistency but divergence from the human grader. The poorest agreement was observed for evaluative and analytic items, whereas checklist and prescriptive rubric items showed less disagreement. We caution against the use of LLMs in grading postgraduate coursework: over 60% of ChatGPT-assigned grades differed from human grades by more than acceptable boundaries for high-stakes assessments.
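Two of the agreement statistics reported here, Cohen's kappa and the Bland–Altman bias, are short enough to sketch directly; the scores below are invented, not the study's data:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters on categorical
    grades, corrected for agreement expected by chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

def bland_altman_bias(a, b):
    """Mean paired difference between two continuous score sets
    (the 'bias' in a Bland-Altman analysis)."""
    return sum(x - y for x, y in zip(a, b)) / len(a)

# Hypothetical pass/fail grades from ChatGPT and a human grader
kappa = cohens_kappa([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
bias = bland_altman_bias([6.0, 7.5, 5.0], [7.0, 8.0, 7.5])  # negative = LLM lower
```

A kappa near zero, as the study reports, means the observed agreement is roughly what random grading would produce.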
SteLLA: A Structured Grading System Using LLMs with RAG
Qiu, Hefei, White, Brian, Ding, Ashley, Costa, Reinaldo, Hachem, Ali, Ding, Wei, Chen, Ping
Large Language Models (LLMs) have shown strong general capabilities in many applications. However, making them reliable tools for specific tasks such as automated short answer grading (ASAG) remains a challenge. We present SteLLA (Structured Grading System Using LLMs with RAG), in which a) a Retrieval Augmented Generation (RAG) approach empowers LLMs on the ASAG task by extracting structured information from highly relevant and reliable external knowledge based on the instructor-provided reference answer and rubric, and b) an LLM performs a structured, question-answering-based evaluation of student answers to provide analytical grades and feedback. We collected a real-world dataset of students' exam answers from a college-level Biology course. Experiments show that the proposed system achieves substantial agreement with the human grader while providing break-down grades and feedback on all the knowledge points examined in the problem. A qualitative and error analysis of the feedback generated by GPT-4 shows that it is good at capturing facts but may be prone to inferring too much from the given text, which provides insight into the use of LLMs in ASAG systems.
Evaluating GPT-4 at Grading Handwritten Solutions in Math Exams
Caraeni, Adriana, Scarlatos, Alexander, Lan, Andrew
Recent advances in generative artificial intelligence (AI) have shown promise in accurately grading open-ended student responses. However, few prior works have explored grading handwritten responses due to a lack of data and the challenge of combining visual and textual information. In this work, we leverage state-of-the-art multi-modal AI models, in particular GPT-4o, to automatically grade handwritten responses to college-level math exams. Using real student responses to questions in a probability theory exam, we evaluate GPT-4o's alignment with ground-truth scores from human graders using various prompting techniques. We find that while providing rubrics improves alignment, the model's overall accuracy is still too low for real-world settings, showing there is significant room for growth in this task.
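The finding that providing rubrics improves alignment boils down to a prompting choice. A sketch of the with/without-rubric comparison; the prompt wording below is hypothetical, not the paper's actual prompt:

```python
def grading_prompt(question, answer, rubric=None):
    """Assemble a grading prompt for a multi-modal LLM grader.
    The rubric is optional, mirroring the paper's comparison of
    prompting with and without one (exact wording is an assumption)."""
    parts = [f"Question: {question}",
             f"Student answer (transcribed from handwriting): {answer}"]
    if rubric:
        parts.append(f"Grading rubric:\n{rubric}")
    parts.append("Assign a score from 0 to 10 and justify it briefly.")
    return "\n\n".join(parts)

bare = grading_prompt("Compute P(A|B).", "P(A|B) = P(A and B) / P(B)")
with_rubric = grading_prompt("Compute P(A|B).",
                             "P(A|B) = P(A and B) / P(B)",
                             rubric="5 pts: correct formula; 5 pts: correct value")
```

In the real pipeline, the handwritten response would be attached as an image alongside this text; only the text scaffolding is shown here.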
Large Language Models for Medical OSCE Assessment: A Novel Approach to Transcript Analysis
Shakur, Ameer Hamza, Holcomb, Michael J., Hein, David, Kang, Shinyoung, Dalton, Thomas O., Campbell, Krystle K., Scott, Daniel J., Jamieson, Andrew R.
Grading Objective Structured Clinical Examinations (OSCEs) is a time-consuming and expensive process, traditionally requiring extensive manual effort from human experts. In this study, we explore the potential of Large Language Models (LLMs) to assess skills related to medical student communication. We analyzed 2,027 video-recorded OSCE examinations from the University of Texas Southwestern Medical Center (UTSW), spanning four years (2019-2022), and several different medical cases or "stations." Specifically, our focus was on evaluating students' ability to summarize patients' medical history: we targeted the rubric item 'did the student summarize the patients' medical history?' from the communication skills rubric. After transcribing speech audio captured by OSCE videos using Whisper-v3, we studied the performance of various LLM-based approaches for grading students on this summarization task based on their examination transcripts. Using various frontier-level open-source and proprietary LLMs, we evaluated different techniques such as zero-shot chain-of-thought prompting, retrieval augmented generation, and multi-model ensemble methods. Our results show that frontier LLM models like GPT-4 achieved remarkable alignment with human graders, demonstrating a Cohen's kappa agreement of 0.88 and indicating strong potential for LLM-based OSCE grading to augment the current grading process. Open-source models also showed promising results, suggesting potential for widespread, cost-effective deployment. Further, we present a failure analysis identifying conditions where LLM grading may be less reliable in this context and recommend best practices for deploying LLMs in medical education settings.
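One of the techniques evaluated is a multi-model ensemble over per-model grades for a single rubric item. A minimal majority-vote sketch, with invented model names and judgements:

```python
from collections import Counter

def ensemble_grade(votes):
    """Majority vote across per-model grades for one binary rubric item.
    Using an odd number of graders avoids ties."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical judgements for the rubric item 'did the student
# summarize the patient's medical history?'
votes = {"frontier-model": "yes",
         "open-model-a": "yes",
         "open-model-b": "no"}
final = ensemble_grade(list(votes.values()))
```

The paper's actual ensembling strategy may weight models differently; plain majority vote is the simplest instance of the idea.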
Unveiling Scoring Processes: Dissecting the Differences between LLMs and Human Graders in Automatic Scoring
Wu, Xuansheng, Saraf, Padmaja Pravin, Lee, Gyeong-Geon, Latif, Ehsan, Liu, Ninghao, Zhai, Xiaoming
Large language models (LLMs) have demonstrated strong potential in performing automatic scoring for constructed response assessments. While constructed responses graded by humans are usually based on given grading rubrics, the methods by which LLMs assign scores remain largely unclear. It is also uncertain how closely AI's scoring process mirrors that of humans, or if it adheres to the same grading criteria. To address this gap, this paper uncovers the grading rubrics that LLMs used to score students' written responses to science tasks and their alignment with human scores. We also examine whether enhancing the alignments can improve scoring accuracy. Specifically, we prompt LLMs to generate analytic rubrics that they use to assign scores and study the alignment gap with human grading rubrics. Based on a series of experiments with various configurations of LLM settings, we reveal a notable alignment gap between human and LLM graders. While LLMs can adapt quickly to scoring tasks, they often resort to shortcuts, bypassing deeper logical reasoning expected in human grading. We found that incorporating high-quality analytical rubrics designed to reflect human grading logic can mitigate this gap and enhance LLMs' scoring accuracy. These results caution against the simplistic application of LLMs in science education and highlight the importance of aligning LLM outputs with human expectations to ensure efficient and accurate automatic scoring.
Autograding Mathematical Induction Proofs with Natural Language Processing
Zhao, Chenyan, Silva, Mariana, Poulsen, Seth
Writing mathematical proofs has been identified as an important [1-3] yet challenging [4] topic in computing education and mathematics education. A large body of research has shown that timely feedback is crucial to student learning [5, 6]. However, students are largely unable to receive timely feedback on written proofs because proofs must be collected and hand-graded by instructors or teaching assistants. Grading student proofs fully automatically with natural language processing (NLP) alleviates this need by giving students instant feedback on their proofs, letting them iteratively improve their quality. In this paper, we propose a novel set of training methods and models capable of autograding freeform mathematical proofs, a problem at the intersection of mathematical proof education and Automatic Short Answer Grading (ASAG), using existing NLP models and other machine learning techniques. Our proof autograder enables grading systems that provide instant feedback to students without requiring attention from instructors, and it can be deployed on large-scale educational platforms, increasing access for students. The main contributions of this paper are:
- Introducing the first pipeline of machine learning models capable of autograding mathematical proofs with accuracy similar to human graders
- Quantifying the amount of training data needed to achieve satisfactory performance from the grading models
- Publishing an anonymized and labeled mathematical proof dataset that can be used in future model development [7]
- Creating a set of autograded problems using the grading pipeline, and performing a user study that answers the following research questions:
  - Are students able to write better proofs by interacting with the autograder and the feedback it generates?
Beyond human subjectivity and error: a novel AI grading system
Gobrecht, Alexandra, Tuma, Felix, Möller, Moritz, Zöller, Thomas, Zakhvatkin, Mark, Wuttig, Alexandra, Sommerfeldt, Holger, Schütt, Sven
The grading of open-ended questions is a high-effort, high-impact task in education. Automating this task promises a significant reduction in workload for education professionals, as well as more consistent grading outcomes for students, by circumventing human subjectivity and error. While recent breakthroughs in AI technology might facilitate such automation, this has not been demonstrated at scale. In this paper, we introduce a novel automatic short answer grading (ASAG) system. The system is based on a fine-tuned open-source transformer model which we trained on a large set of exam data from university courses across a wide range of disciplines. We evaluated the trained model's performance against held-out test data in a first experiment and found high accuracy levels across a broad spectrum of unseen questions, even in unseen courses. We further compared the performance of our model with that of certified human domain experts in a second experiment: we first assembled another test dataset from real historical exams; the historic grades contained in that data were awarded to students in a regulated, legally binding examination process, so we considered them ground truth for our experiment. We then asked certified human domain experts and our model to grade the historic student answers again without disclosing the historic grades. Finally, we compared the resulting grades with the historic grades (our ground truth). We found that for the courses examined, the model deviated less from the official historic grades than the human re-graders did: the model's median absolute error was 44% smaller than the human re-graders', implying that the model is more consistent than humans in grading. These results suggest that AI-enhanced grading can reduce human subjectivity, improve consistency, and thus ultimately increase fairness.
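The 44%-smaller median absolute error comparison is straightforward to sketch; the grades below are invented stand-ins for the historic exam data:

```python
from statistics import median

def median_abs_error(regrades, historic):
    """Median absolute deviation of re-grades from the official
    historic grades, which serve as ground truth."""
    return median(abs(r - h) for r, h in zip(regrades, historic))

# Hypothetical re-grading of the same historic exams
historic = [3.0, 2.0, 4.0, 1.0, 5.0]   # legally binding original grades
model    = [3.0, 2.5, 4.0, 1.5, 5.0]   # ASAG model re-grades
human    = [2.0, 3.0, 4.5, 1.0, 4.0]   # expert re-grades
# The paper's comparison: model MAE vs. human re-grader MAE
mae_model, mae_human = median_abs_error(model, historic), median_abs_error(human, historic)
```

A smaller median absolute error for the model than for the human re-graders is what the paper reads as greater grading consistency.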